Algorithm analysis

Let us take a big-picture approach when analyzing the running time of an algorithm:
it is sufficient to know that the running time grows proportionally to n

E.g., if the number of primitive operations is an + b, the running time is O(n)

- Formal definition: 
f(n) is O(g(n)) if there exist a positive real constant c > 0 and a positive integer 
n_0 >= 1 such that: f(n) <= c * g(n) for all n >= n_0

Example: prove that 8n - 2 is O(n)

8n - 2 <= c * n for n >= n_0

Take c = 8 and n_0 = 1: 8n - 2 <= 8n holds for all n >= 1
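The definition can also be checked numerically over a range of inputs. A minimal sketch (the helper name `holds` and the lambdas are mine, not from the notes):

```python
# Check a Big-Oh witness (c, n_0) for f(n) = 8n - 2 and g(n) = n:
# the witness works if f(n) <= c * g(n) for every tested n >= n_0.

def holds(f, g, c, n0, n_max=10_000):
    """Return True if f(n) <= c * g(n) for all n0 <= n <= n_max."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

f = lambda n: 8 * n - 2
g = lambda n: n

print(holds(f, g, c=8, n0=1))   # the witness from the proof -> True
print(holds(f, g, c=7, n0=1))   # c = 7 fails: 7n < 8n - 2 once n >= 3
```

Of course, a finite check is only a sanity test; the inequality 8n - 2 <= 8n is what proves it for all n >= 1.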

n is O(n^2)

- We characterize a function using the Big-Oh notation as closely as possible

8n is O(n^4) but it is better to say that 8n is O(n)

- We characterize a function using the Big-Oh notation in the simplest terms

n^2 is O(4n^2 + 2n + 5) but it is better to say that n^2 is O(n^2)

- Tool: desmos.com

- The 7 important functions:
We want the operations of a data structure to have a running time that is:
1) Constant ---> O(1)
2) Logarithmic ---> O(log2(n))

We want an algorithm to have a running time that is:
3) Linear ---> O(n)
4) n log2(n) ---> O(n log2(n))

Less practical
5) quadratic ---> O(n^2)
6) Cubic ---> O(n^3)

Infeasible
7) Exponential ---> O(2^n)
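Besides plotting the seven functions on desmos.com, one can tabulate them to see how quickly they separate. A quick sketch:

```python
import math

# Tabulate the seven canonical running-time functions for a few input sizes.
functions = [
    ("constant",  lambda n: 1),
    ("log2(n)",   lambda n: math.log2(n)),
    ("n",         lambda n: n),
    ("n log2(n)", lambda n: n * math.log2(n)),
    ("n^2",       lambda n: n ** 2),
    ("n^3",       lambda n: n ** 3),
    ("2^n",       lambda n: 2 ** n),
]

for name, f in functions:
    row = [round(f(n)) for n in (8, 16, 32, 64)]
    print(f"{name:>10}: {row}")
```

Already at n = 64 the exponential row dwarfs everything above it, which is why O(2^n) is labeled infeasible.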

- Big Oh <=> less than or equal
Big Omega <=> greater than or equal (e.g., n^2 is Big Omega of n)
Big Theta <=> same growth rate
If we can prove that f(n) is O(g(n)) and f(n) is Big Omega of g(n), then f(n) is Big Theta of g(n)

- If we cannot figure out the relationship between f(n) and g(n) visually,
take the limit as n goes to infinity of the ratio of the two functions (applying L'Hôpital's rule as needed)

log_e(n) is O(sqrt(n))

- 2^n vs n^{1000000} => n^{1000000} is O(2^n)
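The limit test can be approximated numerically by evaluating the ratio at increasingly large n. A sketch (the `ratio` helper is mine; the second loop uses exponent 20 instead of 1000000 just to keep the numbers representable):

```python
import math

def ratio(f, g, n):
    """Evaluate f(n) / g(n); a ratio shrinking toward 0 suggests f is O(g)."""
    return f(n) / g(n)

# ln(n) / sqrt(n) shrinks toward 0, consistent with ln(n) being O(sqrt(n)).
for n in (10**2, 10**4, 10**6, 10**8):
    print(n, ratio(math.log, math.sqrt, n))

# n^k / 2^n also shrinks toward 0 once n is large enough,
# consistent with n^k being O(2^n) for any fixed k.
for n in (100, 200, 400, 800):
    print(n, ratio(lambda n: n**20, lambda n: 2.0**n, n))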

- Analyze the running time of iterative solutions
Procedure: 
1. Identify the bottleneck section of the implementation
2. Count the number of times this bottleneck section will be executed
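As a hypothetical example of this procedure (the function `max_pair_sum` is mine, not from the notes): in the nested loops below, the bottleneck is the innermost line, executed n*(n-1)/2 times, so the algorithm is O(n^2).

```python
def max_pair_sum(arr):
    """Largest sum of two distinct elements, the naive way."""
    n = len(arr)
    best = float("-inf")
    ops = 0                      # count executions of the bottleneck line
    for i in range(n):
        for j in range(i + 1, n):
            ops += 1             # bottleneck: runs n*(n-1)/2 times -> O(n^2)
            best = max(best, arr[i] + arr[j])
    return best, ops

best, ops = max_pair_sum(list(range(100)))
print(best, ops)   # -> 197 4950, and 4950 = 100 * 99 / 2
```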

- Analyze the running time of recursive solutions

factorial(n): 
	if n == 1 return 1
	return n * factorial(n-1)

Let us denote by T(n) the running time of this algorithm when the input is n
T(n-1) is the running time when the input is n - 1

Step#1: Come up with a recurrence relation between T(n) and T(n-1)

T(n) = 1 + T(n-1)

Step#2: Consider the base case
T(1) = 1

Step#3: Expand the recurrence relation out

T(n) = T(n-1) + 1 = T(n-2) + 2 = T(n-3) + 3 ...k times ... T(n) = T(n-k) + k

n - k = 1 => k = n - 1; for k = n - 1 => T(n) = T(1) + n - 1 = 1 + n - 1 = n is O(n)
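The conclusion T(n) = n can be checked empirically by instrumenting the factorial above to count its calls (the mutable `counter` argument is my addition):

```python
def factorial(n, counter):
    counter[0] += 1              # one unit of work per call
    if n == 1:
        return 1
    return n * factorial(n - 1, counter)

for n in (1, 5, 10):
    counter = [0]
    result = factorial(n, counter)
    print(n, result, counter[0])  # counter[0] == n, matching T(n) = n
```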

T(n-1) = T(n-2) + 1
T(n-2) = T(n-3) + 1

- Recursive Binary search

BSRec(arr, target, left, right):
	if left > right:
		return -1 (target not present)
	mid = floor((left + right) / 2)
	if arr[mid] == target: 
		return mid
	else if arr[mid] < target:
		return BSRec(arr, target, mid + 1, right)
	else:
		return BSRec(arr, target, left, mid - 1)
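A runnable Python sketch of the same algorithm; I assume the convention that a missing target returns -1:

```python
def bs_rec(arr, target, left, right):
    """Recursive binary search over the sorted slice arr[left..right]."""
    if left > right:
        return -1                      # assumed convention: not found
    mid = (left + right) // 2          # integer (floor) division
    if arr[mid] == target:
        return mid
    elif arr[mid] < target:
        return bs_rec(arr, target, mid + 1, right)
    else:
        return bs_rec(arr, target, left, mid - 1)

arr = [2, 3, 5, 7, 11, 13, 17]
print(bs_rec(arr, 11, 0, len(arr) - 1))   # -> 4
print(bs_rec(arr, 6, 0, len(arr) - 1))    # -> -1
```

Each call does constant work and then recurses on half the scope, which is where the recurrence T(n) = 1 + T(n/2) below comes from.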

n is the size of the search scope (right - left + 1)
Step#1: Recurrence relation
T(n) = 1 + T(n/2)

Step#2: Base case
T(1) = 1


Step#3: Expansion
T(n) = T(n / 2) + 1 = T(n/2^2) + 2 = T(n/2^3) + 3 ... k times: T(n) = T(n/2^k) + k
n / 2^k = 1 => k = log2(n); For k = log_2(n); T(n) = 1 + log2(n) is O(log2(n))

T(n/2) = T(n/4) + 1
T(n/4) = T(n/8) + 1
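The conclusion T(n) = 1 + log2(n) can be checked by simulating the recurrence directly. A sketch (integer division stands in for n/2):

```python
import math

def T(n):
    """Simulate T(n) = T(n // 2) + 1 with T(1) = 1."""
    if n == 1:
        return 1
    return T(n // 2) + 1

for k in (4, 10, 20):
    n = 2 ** k
    print(n, T(n), 1 + math.log2(n))   # T(n) == 1 + log2(n) for powers of two
```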


- T(n) = T(n/2) + n
T(1) = 1

Big Oh characterization of T(n)?
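Following the same three-step procedure, one way to see the answer: expanding gives T(n) = n + n/2 + n/4 + ... + 1, a geometric series bounded by 2n, so T(n) is O(n). A quick simulation supports this:

```python
def T(n):
    """Simulate T(n) = T(n // 2) + n with T(1) = 1."""
    if n <= 1:
        return 1
    return T(n // 2) + n

# The sum n + n/2 + n/4 + ... never exceeds 2n, so T(n)/n should stay below 2.
for k in (4, 10, 20):
    n = 2 ** k
    print(n, T(n), T(n) / n)
```

Note the extra work per level is n here, not 1 as in binary search, which is why the total is linear rather than logarithmic.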